Fully Spiking Variational Autoencoder
Authors
Abstract
Spiking neural networks (SNNs) can be run on neuromorphic devices with ultra-high speed and ultra-low energy consumption because of their binary, event-driven nature. SNNs are therefore expected to have various applications, including as generative models running on edge devices to create high-quality images. In this study, we build a variational autoencoder (VAE) with an SNN to enable image generation. The VAE is known for its stability among generative models, and its generation quality has recently advanced. In a vanilla VAE, the latent space is represented by a normal distribution, and floating-point calculations are required for sampling. However, this is not possible in SNNs, where all features must be binary time-series data. Therefore, we constructed the latent space with an autoregressive SNN model and sampled the latent variables by randomly selecting from its output. This allows the latent variables to follow a Bernoulli process and enables variational learning. We thus build the Fully Spiking Variational Autoencoder, in which all modules are constructed with SNNs. To the best of our knowledge, this is the first VAE built with only SNN layers. We experimented with several datasets and confirmed that it can generate images of the same or better quality compared with conventional ANNs. The code is available at https://github.com/kamata1729/FullySpikingVAE.
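The Bernoulli sampling step described above can be sketched as follows. All sizes and names here are illustrative assumptions, not the paper's implementation: a stand-in replaces the autoregressive SNN, and the point is only how randomly selecting one of K binary candidate outputs yields a Bernoulli-distributed, fully binary latent spike train.

```python
import random

random.seed(0)

# Hypothetical sizes, chosen for illustration only (not from the paper).
T, D, K = 16, 8, 4  # time steps, latent dimensions, candidate outputs per dimension

# Stand-in for the autoregressive SNN output: K binary candidate spikes per
# latent dimension at each time step. A real model would condition each step
# on previously sampled spikes; random bits keep the sketch short.
candidates = [[[random.randint(0, 1) for _ in range(K)]
               for _ in range(D)]
              for _ in range(T)]

# Sampling: pick one of the K candidates uniformly at random. If a fraction p
# of the candidates fire, the selected spike follows Bernoulli(p), so the
# latent variable remains a binary time series, as neuromorphic hardware requires.
z = [[random.choice(candidates[t][d]) for d in range(D)] for t in range(T)]

assert all(s in (0, 1) for row in z for s in row)  # latent spikes stay binary
```

Because selection only ever copies an existing binary output, no floating-point arithmetic is needed at sampling time, which is the property that lets every module stay spiking.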
Similar papers
Variational Lossy Autoencoder
Representation learning seeks to expose certain aspects of observed data in a learned representation that’s amenable to downstream tasks like classification. For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture. In this paper, we present a simple but principled method to learn such global representati...
Quantum Variational Autoencoder
Variational autoencoders (VAEs) are powerful generative models with the salient ability to perform inference. Here, we introduce a quantum variational autoencoder (QVAE): a VAE whose latent generative process is implemented as a quantum Boltzmann machine (QBM). We show that our model can be trained end-to-end by maximizing a well-defined loss-function: a “quantum” lowerbound to a variational ap...
Epitomic Variational Autoencoder
In this paper, we propose epitomic variational autoencoder (eVAE), a probabilistic generative model of high dimensional data. eVAE is composed of a number of sparse variational autoencoders called ‘epitome’ such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. We show that the proposed model greatly overcomes the common problem in varia...
Adversarial Symmetric Variational Autoencoder
A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for the marginal log-likelihood fits of observed data and late...
Grammar Variational Autoencoder
Deep generative models have been wildly successful at learning coherent latent representations for continuous data such as natural images, artwork, and audio. However, generative modeling of discrete data such as arithmetic expressions and molecular structures still poses significant challenges. Crucially, state-of-the-art methods often produce outputs that are not valid. We make the key observ...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2022
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v36i6.20665